The increasing prevalence of neural networks (NNs) in safety-critical applications calls for methods that certify safe behavior. This paper presents a backward reachability approach for the safety verification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While recent works have focused on forward reachability as a strategy for safety certification of NFLs, backward reachability offers advantages over the forward strategy, particularly in obstacle-avoidance scenarios. Prior work has developed techniques for backward reachability analysis of systems without NNs, but NNs in the feedback loop present unique problems due to the nonlinearities of their activation functions and because NN models are generally not invertible. To overcome these challenges, we use existing NN analysis tools to efficiently find over-approximations of the backprojection (BP) set, i.e., the set of states for which the NN control policy will drive the system to a given target set. We present frameworks for computing BP set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs, and propose computationally efficient strategies. We demonstrate the proposed algorithms with numerical results on a variety of models, including safety certification of a 6D system.
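The backprojection idea can be illustrated with a deliberately tiny example. The sketch below handles only a one-dimensional, one-step linear system; the paper's frameworks cover multi-dimensional linear and nonlinear dynamics, and the control bounds would come from relaxing the NN policy with an existing NN analysis tool rather than being given directly. The function `bp_interval` and its signature are illustrative, not the paper's API.

```python
def bp_interval(a, b, target, u_bounds):
    """Over-approximate the backprojection set {x : exists u in u_bounds
    with a*x + b*u in target} for the one-step linear system x' = a*x + b*u.

    Assumes a > 0 and b >= 0. In the paper's setting, u_bounds would be
    obtained by bounding the NN control policy over a candidate region.
    """
    t_lo, t_hi = target
    u_lo, u_hi = u_bounds
    # a*x + b*u in [t_lo, t_hi]  =>  x in [(t_lo - b*u_hi)/a, (t_hi - b*u_lo)/a]
    return ((t_lo - b * u_hi) / a, (t_hi - b * u_lo) / a)
```

For instance, with dynamics `x' = x + u`, target set `[0, 1]`, and control bounded in `[-0.5, 0.5]`, every state in `[-0.5, 1.5]` can reach the target in one step, and no state outside it can.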
Active galactic nuclei (AGN) are supermassive black holes with luminous accretion disks found in some galaxies, and are thought to play an important role in galaxy evolution. However, traditional optical spectroscopy for identifying AGN requires time-intensive observations. We train a convolutional neural network (CNN) to distinguish AGN host galaxies from non-active galaxies using a sample of 210,000 Sloan Digital Sky Survey galaxies. We evaluate the CNN on 33,000 galaxies that are spectrally classified as composites, and find correlations between galaxy appearances and their CNN classifications, which hint at evolutionary processes that affect both galaxy morphology and AGN activity. With the advent of the Vera C. Rubin Observatory, Nancy Grace Roman Space Telescope, and other wide-field imaging telescopes, deep learning methods will be instrumental for quickly and reliably shortlisting AGN samples for future analyses.
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
Identifying statistical regularities in solutions to some tasks in multi-task reinforcement learning can accelerate the learning of new tasks. Skill learning offers one way of identifying these regularities by decomposing pre-collected experiences into a sequence of skills. A popular approach to skill learning is maximizing the likelihood of the pre-collected experience with latent variable models, where the latent variables represent the skills. However, there are often many solutions that maximize the likelihood equally well, including degenerate solutions. To address this underspecification, we propose a new objective that combines the maximum likelihood objective with a penalty on the description length of the skills. This penalty incentivizes the skills to maximally extract common structures from the experiences. Empirically, our objective learns skills that solve downstream tasks in fewer samples compared to skills learned from only maximizing likelihood. Further, while most prior works in the offline multi-task setting focus on tasks with low-dimensional observations, our objective can scale to challenging tasks with high-dimensional image observations.
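The combined objective described above — maximum likelihood plus a description-length penalty on the skills — can be sketched abstractly. This is a minimal sketch, assuming the penalty is the code length of the inferred skill sequence under the skill prior; the paper's exact formulation may differ, and `mdl_skill_loss`, `skill_probs`, and `beta` are illustrative names, with the latent-variable model itself elided.

```python
import math

def mdl_skill_loss(nll, skill_probs, beta=0.1):
    """Likelihood-plus-description-length objective (sketch).

    nll: negative log-likelihood of the pre-collected experience under
         the latent-variable skill model.
    skill_probs: prior probabilities of each skill in the inferred sequence.
    The penalty is the code length (in nats) of that sequence, so common,
    reusable skills are cheap to encode and degenerate solutions that
    merely fit the likelihood are discouraged.
    """
    description_length = -sum(math.log(p) for p in skill_probs)
    return nll + beta * description_length
```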
A growing ecosystem of large, open-source foundation models has reduced the labeled data and technical expertise necessary to apply machine learning to many new problems. Yet foundation models pose a clear dual-use risk, indiscriminately reducing the costs of building both harmful and beneficial machine learning systems. To mitigate this risk, we propose the task blocking paradigm, in which foundation models are trained with an additional mechanism to impede adaptation to harmful tasks while retaining good performance on desired tasks. We call the resulting models self-destructing models, inspired by mechanisms that prevent adversaries from using tools for harmful purposes. We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning, showing that it can largely prevent a BERT-based model from learning to perform gender identification without harming the model's ability to perform profession classification. We conclude with a discussion of future directions.
Large language models have recently shown promising progress in mathematical reasoning when fine-tuned with human-generated sequences walking through a sequence of solution steps. However, the solution sequences are not formally structured and the resulting model-generated sequences may not reflect the kind of systematic reasoning we might expect an expert human to produce. In this paper, we study how to build stronger reasoning capability in language models using the idea of relational abstractions. We introduce new types of sequences that more explicitly provide an abstract characterization of the transitions through intermediate solution steps to the goal state. We find that models that are supplied with such sequences as prompts can solve tasks with a significantly higher accuracy, and models that are trained to produce such sequences solve problems better than those that are trained with previously used human-generated sequences and other baselines. Our work thus takes several steps toward elucidating and improving how language models perform on tasks requiring multi-step mathematical reasoning.
This project describes the application of machine learning and Internet of Things (IoT) technologies to assessing lower-limb strength in individuals undergoing rehabilitation or therapy. Specifically, it seeks to measure and assess an individual's progress via sensors mounted on a chair, with the data processed through Google GPU TensorFlow Colab. Pressure sensors are attached to various locations on the chair, including but not limited to the seating area, the backrest, the armrests, and the legs. Sensor data collected while individuals perform sit-to-stand and stand-to-sit transitions yields a time-series dataset describing the pressure distribution and vibratory motion on the chair. The dataset and timing information can then be fed into a machine learning model to estimate relative strength during each phase of the movement.
In many clinical contexts, detecting all lesions is critical for assessing disease activity. Standard approaches still frame lesion detection as a segmentation problem, despite the time-consuming nature of acquiring segmentation labels. In this paper, we propose a lesion detection method that relies only on point labels. Our model, trained via heatmap regression, can detect a variable number of lesions in a probabilistic fashion. Moreover, our proposed post-processing method offers a reliable way of directly estimating the uncertainty of lesion existence. Experimental results on GAD lesion detection show that our point-based method is competitive with training on expensive segmentation labels. Finally, our detection model provides a suitable pre-training for segmentation: when fine-tuned on only 17 segmentation samples, we achieve performance comparable to training on the full dataset.
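The step from a regressed heatmap to a variable number of detections can be sketched as simple peak extraction: keep every pixel that exceeds a probability threshold and is the strict maximum of its 3x3 neighborhood. This is a generic sketch, not the paper's post-processing; the threshold and neighborhood size are assumptions.

```python
import numpy as np

def detect_points(heatmap, thresh=0.5):
    """Extract (row, col, score) detections from a 2D probability heatmap.

    A pixel counts as a detection if it exceeds `thresh` and is the unique
    maximum within its 3x3 neighborhood, so each lesion yields one point.
    """
    pts = []
    H, W = heatmap.shape
    # pad with -inf so border pixels compare only against real neighbors
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    for i in range(H):
        for j in range(W):
            v = heatmap[i, j]
            nb = padded[i:i + 3, j:j + 3]  # 3x3 window centered on (i, j)
            if v > thresh and v == nb.max() and (nb == v).sum() == 1:
                pts.append((i, j, float(v)))
    return pts
```

The per-peak score doubles as the probabilistic output: it can be read as the model's confidence that a lesion exists at that location.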
Leveraging many sources of offline robot data requires grappling with the heterogeneity of such data. In this paper, we focus on one particular aspect of heterogeneity: learning from offline data collected at different control frequencies. Across labs, the discretization of controllers, the sampling rates of sensors, and the demands of a target task may all differ, giving rise to mixed frequencies within an aggregated dataset. We study how well offline reinforcement learning (RL) algorithms can cope with mixed-frequency data during training. We observe that the $Q$-value propagates at different rates for different discretizations, leading to a number of learning challenges for off-the-shelf offline RL. We present a simple yet effective solution that enforces consistency in the rate of $Q$-value updates to stabilize learning. By scaling the value of $N$ in $N$-step returns with the discretization size, we effectively balance $Q$-value propagation, leading to more stable convergence. On three simulated robotic control problems, we empirically find that this simple approach outperforms naive mixing by over 50% on average.
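The core fix — scaling $N$ with the discretization size so that every target looks ahead the same wall-clock horizon — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and arguments are assumptions, and scaling the per-step discount with `dt` is an additional assumption made here to keep the wall-clock discounting comparable across frequencies.

```python
def n_step_target(rewards, bootstrap_q, gamma_per_sec, dt, horizon_sec):
    """N-step return target where N is chosen from the timestep size.

    rewards: per-step rewards along the sampled trajectory segment.
    bootstrap_q: Q-value estimate at the state N steps ahead.
    dt: the control period (seconds) at which this data was collected.
    horizon_sec: the fixed wall-clock lookahead shared by all frequencies.
    """
    n = max(1, round(horizon_sec / dt))      # more steps for finer discretizations
    gamma = gamma_per_sec ** dt              # per-step discount consistent across dt
    steps = min(n, len(rewards))
    target = sum((gamma ** k) * rewards[k] for k in range(steps))
    target += (gamma ** steps) * bootstrap_q
    return target
```

Data collected at 20 Hz thus uses twice as many lookahead steps as data collected at 10 Hz, so $Q$-values propagate at the same wall-clock rate in both.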
Even the largest neural networks make errors, and once-correct predictions can become invalid as the world changes. Model editors make local updates to the behavior of base (pre-trained) models to inject updated knowledge or correct undesirable behaviors. Existing model editors have shown promise, but also suffer from insufficient expressiveness: they struggle to accurately model an edit's intended scope (the examples affected by the edit), leading to inaccurate predictions on test inputs loosely related to the edit, and they often fail altogether after many edits. As a higher-capacity alternative, we propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC), which stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed. To enable more rigorous evaluation of model editors, we introduce three challenging language model editing problems based on question answering, fact checking, and dialogue generation. We find that only SERAC achieves high performance on all three problems, consistently outperforming existing approaches to model editing by a significant margin. Code, data, and additional project information will be available at https://sites.google.com/view/serac-editing.
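The semi-parametric routing described above can be sketched as a dispatch rule: a scope classifier decides whether a query is covered by any stored edit; if so, a counterfactual model answers conditioned on that edit, otherwise the untouched base model answers. Everything below is a toy stand-in — in SERAC the scope classifier and counterfactual model are learned, whereas here they are hypothetical callables supplied for illustration.

```python
def serac_predict(query, edit_memory, base_model, in_scope, counterfactual_model):
    """Route a query between the base model and the edit memory (sketch).

    edit_memory: explicit list of stored edits, never written into the base
    model's weights; the base model's behavior outside edit scope is untouched.
    """
    for edit in edit_memory:
        if in_scope(query, edit):
            # covered by an edit: answer with the counterfactual model
            return counterfactual_model(query, edit)
    # out of scope of all edits: defer to the unmodified base model
    return base_model(query)

# toy stand-ins: an edit is a (scope keyword, new answer) pair
memory = [("capital of freedonia", "Sylvania City")]
in_scope = lambda q, e: e[0] in q.lower()
counterfactual = lambda q, e: e[1]
base = lambda q: "unknown"
```

Because edits live in an explicit memory rather than in the weights, adding or retracting an edit is a list operation, which is one reason this style of editor can survive many sequential edits.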